
Discrete Dictionary-based Decomposition Layer for Structured Representation Learning

Park, Taewon, Kim, Hyun-Chul, Lee, Minho

arXiv.org Artificial Intelligence

Neuro-symbolic neural networks have been extensively studied to integrate symbolic operations with neural networks, thereby improving systematic generalization. Specifically, the Tensor Product Representation (TPR) framework enables neural networks to perform differentiable symbolic operations by encoding the symbolic structure of data within vector spaces. However, TPR-based neural networks often struggle to decompose unseen data into structured TPR representations, undermining their symbolic operations. To address this decomposition problem, we propose a Discrete Dictionary-based Decomposition (D3) layer designed to enhance the decomposition capabilities of TPR-based models. D3 employs discrete, learnable key-value dictionaries trained to capture symbolic features essential for decomposition operations. It leverages the prior knowledge acquired during training to generate structured TPR representations by mapping input data to pre-learned symbolic features within these dictionaries. D3 is a straightforward drop-in layer that can be seamlessly integrated into any TPR-based model without modifications. Our experimental results demonstrate that D3 significantly improves the systematic generalization of various TPR-based models while requiring fewer additional parameters. Notably, D3 outperforms baseline models on a synthetic task that demands the systematic decomposition of unseen combinatorial data.
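The abstract's core idea — mapping an input vector to a pre-learned symbolic feature via a discrete key-value dictionary — can be sketched in a few lines. This is a toy illustration, not the paper's actual D3 layer: the function name, dictionary sizes, and nearest-key (argmax) selection rule are all assumptions for illustration; the real layer uses learnable dictionaries trained end-to-end.

```python
import numpy as np

def d3_lookup(x, keys, values):
    """Map an input vector to a stored symbolic feature by discrete
    (nearest-key) dictionary lookup: score x against every key, pick
    the best-matching entry, and return its paired value."""
    scores = keys @ x                 # similarity of x to each key
    idx = int(np.argmax(scores))      # discrete selection
    return values[idx], idx

# Toy dictionary: 4 entries, key dim 4, value (symbolic feature) dim 6.
keys = np.eye(4)
values = np.arange(4 * 6).reshape(4, 6).astype(float)

x = np.array([0.1, 0.9, 0.2, 0.0])   # most similar to key 1
feat, idx = d3_lookup(x, keys, values)
```

Because the selection is discrete, any input near a learned key snaps to exactly one stored feature — which is what lets the layer hand downstream TPR operations a clean, structured representation even for unseen inputs.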


What Is Project Discovery Phase, Why it Matters, and How to Run It?

#artificialintelligence

No worries, we've got a remedy for that: the discovery phase. With this homework done properly, the rest of the project will go smoothly. So, let us dive right into it. Before we delve into the discovery phase of software development, let's start with the broader perspective to understand the fundamental idea behind it better. Usually, a project's lifecycle includes the following stages: initiation, planning, execution, control, and closure.


Digital Workforce Unlocks Business Opportunities Worth Six Figures for Planet Mark

#artificialintelligence

Digital Workforce Solutions PLC, the leading provider of intelligent automation (IA) services, has helped sustainability certification programme Planet Mark identify significant growth opportunities with a six-figure business value. Wanting to capitalise on growing demand to measure and quantify the impact of sustainability projects, Planet Mark needed to increase the productivity of its team of highly skilled analysts. With hiring new staff a costly and time-consuming process, Planet Mark identified automation as a potential option to free up staff to focus on more complex tasks. To ensure Planet Mark fully understood what it could achieve through automation, Digital Workforce carried out a discovery phase highlighting the processes which could be automated. In total, the discovery phase identified roughly 1,000 reports, with a business value of six figures, which led Planet Mark to reassess its overall strategy based on what it could achieve through automation.


Does Artificial Intelligence Design New Drugs Or Discover Them?

#artificialintelligence

In mathematics, there is this age-old question of whether new math is discovered or invented. It makes sense to ask the same sort of question about modern drug discovery. When using artificial intelligence to identify drug candidates, are these new drug candidates being developed, or simply exposed through a process of narrowing down the possibilities using mathematics and science? Are these new drug candidates discovered or designed? [Photo caption: Biotechnology company Moderna protocol files for COVID-19 vaccinations are kept at the Research Centers of America in Hollywood, Florida, on August 13, 2020.]


The Anatomy of AI: Understanding Data Processing Tasks

#artificialintelligence

But as your data scientists and data engineers quickly realize, building a production AI system is a lot easier said than done, and there are many steps to master before you get that ML magic. At a high level, the anatomy of AI is fairly simple. You start with some data, train a machine learning model upon it, and then position the model to infer on real-world data. Unfortunately, as the old saying goes, the devil is in the details. And in the case of AI, there are a lot of small details you have to get right before you can claim victory.
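The three high-level steps named above — start with data, train a model on it, then position it to infer on new data — can be sketched in a few lines. This is a deliberately minimal illustration (a one-feature least-squares fit with made-up data), standing in for the far messier production pipeline the article is describing:

```python
import numpy as np

# 1. Start with some data (here: synthetic samples of y = 2x + 1 plus noise).
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(100, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=100)

# 2. Train a machine learning model on it (ordinary least squares,
#    with a column of ones so the model can learn the intercept).
A = np.hstack([X, np.ones((100, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# 3. Position the model to infer on new, "real-world" data.
x_new = np.array([[5.0, 1.0]])        # feature value 5, plus the bias term
y_pred = x_new @ coef                 # close to 2*5 + 1 = 11
```

Everything the article warns about — data cleaning, feature engineering, validation, deployment, monitoring — lives in the gaps between these three comments; that is where the devil in the details resides.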


Grounded Language Learning Fast and Slow

Hill, Felix, Tieleman, Olivier, von Glehn, Tamara, Wong, Nathaniel, Merzic, Hamza, Clark, Stephen

arXiv.org Artificial Intelligence

Recent work has shown that large text-based neural language models, trained with conventional supervised learning objectives, acquire a surprising propensity for few- and one-shot learning. Here, we show that an embodied agent situated in a simulated 3D world, and endowed with a novel dual-coding external memory, can exhibit similar one-shot word learning when trained with conventional reinforcement learning algorithms. After a single introduction to a novel object via continuous visual perception and a language prompt ("This is a dax"), the agent can re-identify the object and manipulate it as instructed ("Put the dax on the bed"). In doing so, it seamlessly integrates short-term, within-episode knowledge of the appropriate referent for the word "dax" with long-term lexical and motor knowledge acquired across episodes (i.e. "bed" and "putting"). We find that, under certain training conditions and with a particular memory writing mechanism, the agent's one-shot word-object binding generalizes to novel exemplars within the same ShapeNet category, and is effective in settings with unfamiliar numbers of objects. We further show how dual-coding memory can be exploited as a signal for intrinsic motivation, stimulating the agent to seek names for objects that may be useful for later executing instructions. Together, the results demonstrate that deep neural networks can exploit meta-learning, episodic memory and an explicitly multi-modal environment to account for 'fast-mapping', a fundamental pillar of human cognitive development and a potentially transformative capacity for agents that interact with human users.
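The "dual-coding external memory" the abstract describes — storing paired visual and language codes so that a query in one modality retrieves the other — can be illustrated with a toy sketch. The class name, embeddings, and dot-product retrieval below are assumptions for illustration only; the paper's agent uses learned embeddings and a trained writing mechanism, not hand-set vectors:

```python
import numpy as np

class DualCodingMemory:
    """Toy external memory holding paired (visual, language) codes.
    A language query retrieves the entry whose stored language code is
    most similar, and returns that entry's paired visual code."""

    def __init__(self):
        self.visual, self.language = [], []

    def write(self, v_code, l_code):
        self.visual.append(v_code)
        self.language.append(l_code)

    def query_by_language(self, l_query):
        sims = [float(l_query @ l) for l in self.language]
        return self.visual[int(np.argmax(sims))]

mem = DualCodingMemory()

# One-shot introduction: "This is a dax" binds a word code to a visual code.
dax_visual = np.array([1.0, 0.0, 0.0])
dax_word = np.array([0.0, 1.0, 0.0])
mem.write(dax_visual, dax_word)
mem.write(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))  # another object

# Later instruction ("Put the dax on the bed") queries memory by the word.
recalled = mem.query_by_language(dax_word)
```

The point of the sketch is the within-episode binding: a single write is enough for a later language query to recover the right visual referent, which is the "fast-mapping" behaviour the paper studies.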


AI Week: How government learned to stop worrying and love AI

#artificialintelligence

It is often the case that, if you look a little too closely at the next big thing in the technology industry, it may start to resemble the last big thing. Or perhaps even one of the many big things that came before that. When many were congratulating Apple for inventing the tablet market with the release of the iPad in 2010, many more were pointing to the touchscreen devices manufactured over the previous two decades by the likes of Palm, FSC, and Nokia – not to mention Apple itself. When the concept of cloud computing began to go mainstream, there were plenty of onlookers who wondered whether this exciting new concept was really just a synonym for the internet, or virtualisation, or software as a service or, simply, 'someone else's computer'. Artificial intelligence – perhaps the biggest and nextest of the current next big things – is also nothing new, many would argue.